Blind Demixing for Low-Latency Communication
In next-generation wireless networks, low-latency communication is
critical to support emerging diversified applications, e.g., the Tactile Internet
and Virtual Reality. In this paper, a novel blind demixing approach is
developed to reduce the channel signaling overhead, thereby supporting
low-latency communication. Specifically, we develop a low-rank approach to
recover the original information based only on a single observed vector, without
any channel estimation. Unfortunately, this problem turns out to be a highly
intractable non-convex optimization problem due to the multiple non-convex
rank-one constraints. To address these unique challenges, the quotient manifold
geometry of the product of complex asymmetric rank-one matrices is exploited by
equivalently reformulating the original complex asymmetric matrices as
Hermitian positive semidefinite matrices. We further generalize the geometric
concepts of the complex product manifolds via element-wise extension of the
geometric concepts of the individual manifolds. A scalable Riemannian
trust-region algorithm is then developed to solve the blind demixing problem
efficiently with fast convergence rates and low iteration cost. Numerical
results demonstrate the algorithmic advantages and admirable performance
of the proposed algorithm compared with state-of-the-art methods.
Comment: 14 pages, accepted by IEEE Transactions on Wireless Communications
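The structure this approach exploits can be illustrated with a small sketch: each entry of the single observed vector is a sum of bilinear forms, and therefore a linear measurement of the rank-one matrices h_k x_k^H. The dimensions, Gaussian design vectors, and variable names below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
s, K, N, m = 3, 4, 5, 20   # s sources, h_k in C^K, x_k in C^N, m measurements

# ground-truth channel / signal factors for each source (hypothetical sizes)
h = [rng.standard_normal(K) + 1j * rng.standard_normal(K) for _ in range(s)]
x = [rng.standard_normal(N) + 1j * rng.standard_normal(N) for _ in range(s)]

# known design vectors b_{jk}, a_{jk}
B = rng.standard_normal((m, s, K)) + 1j * rng.standard_normal((m, s, K))
A = rng.standard_normal((m, s, N)) + 1j * rng.standard_normal((m, s, N))

# single observed vector: each entry mixes all s sources bilinearly,
# y_j = sum_k (b_{jk}^H h_k)(x_k^H a_{jk})
y = np.array([sum(B[j, k].conj() @ h[k] * (x[k].conj() @ A[j, k])
                  for k in range(s)) for j in range(m)])

# the same y, rewritten as linear measurements of the rank-one matrices
# M_k = h_k x_k^H -- the low-rank structure the Riemannian method optimizes over
M = [np.outer(h[k], x[k].conj()) for k in range(s)]
y_lin = np.array([sum(B[j, k].conj() @ M[k] @ A[j, k]
                      for k in range(s)) for j in range(m)])
```

The second computation reproduces the first exactly, which is the rank-one reformulation that turns blind demixing into optimization over a product of low-rank matrices.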
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
Developing slim and accurate deep neural networks has become crucial for
real-world applications, especially those employed in embedded systems.
Though previous work along this research line has shown some promising results,
most existing methods either fail to significantly compress a well-trained deep
network or require a heavy retraining process for the pruned deep network to
re-boost its prediction performance. In this paper, we propose a new layer-wise
pruning method for deep neural networks. In our proposed method, parameters of
each individual layer are pruned independently based on second order
derivatives of a layer-wise error function with respect to the corresponding
parameters. We prove that the final prediction performance drop after pruning
is bounded by a linear combination of the reconstructed errors caused at each
layer. Therefore, there is a guarantee that one only needs to perform a light
retraining process on the pruned network to resume its original prediction
performance. We conduct extensive experiments on benchmark datasets to
demonstrate the effectiveness of our pruning method compared with several
state-of-the-art baseline methods.
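The classic Optimal Brain Surgeon rule underlying this family of methods prunes the weight with the smallest second-order saliency w_q^2 / (2 [H^-1]_qq) and applies a compensating update to the remaining weights. A minimal layer-wise sketch on a toy linear layer, with hypothetical sizes (the paper's layer-wise error function and networks are more involved):

```python
import numpy as np

def obs_prune_one(w, H_inv, pruned=frozenset()):
    """Prune the weight with the smallest OBS saliency
    L_q = w_q^2 / (2 [H^-1]_qq) and apply the compensating update
    dw = -(w_q / [H^-1]_qq) * H^-1 e_q to the remaining weights."""
    saliency = w ** 2 / (2.0 * np.diag(H_inv))
    for i in pruned:                     # never re-select pruned weights
        saliency[i] = np.inf
    q = int(np.argmin(saliency))
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    w[q] = 0.0                           # enforce exact zero numerically
    return w, q

# toy layer: reconstruct outputs y = X @ w_true from inputs X
rng = np.random.default_rng(1)
n, d = 200, 6
X = rng.standard_normal((n, d))
w_true = np.array([2.0, -1.5, 0.01, 0.8, 0.0, 1.2])  # two near-useless weights
y = X @ w_true

# layer-wise Hessian of the squared reconstruction error, lightly damped
H = X.T @ X / n
H_inv = np.linalg.inv(H + 1e-6 * np.eye(d))

w, q1 = obs_prune_one(w_true.copy(), H_inv)
w, q2 = obs_prune_one(w, H_inv, pruned={q1})
```

Note the sketch lets the compensating update perturb previously pruned entries; a full implementation would constrain them to remain zero.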
Performance Evaluation and Modeling of HPC I/O on Non-Volatile Memory
HPC applications pose high demands on I/O performance and storage capability.
The emerging non-volatile memory (NVM) techniques offer low-latency, high
bandwidth, and persistence for HPC applications. However, the existing I/O
stack is designed and optimized under the assumption of disk-based storage.
To effectively use NVM, we must re-examine the existing high performance
computing (HPC) I/O sub-system to properly integrate NVM into it. Once NVM
serves as a fast storage device, the previous assumption of inferior storage
performance (e.g., of hard drives) no longer holds. The performance problem
caused by slow storage may be mitigated, and the existing mechanisms that
narrow the performance gap between storage and CPU may become unnecessary and
introduce large overhead. Thus, fully understanding the impact of introducing NVM into the HPC
software stack demands a thorough performance study.
In this paper, we analyze and model the performance of I/O intensive HPC
applications with NVM as a block device. We study the performance from three
perspectives: (1) the impact of NVM on the performance of traditional page
cache; (2) a performance comparison between MPI individual I/O and POSIX I/O;
and (3) the impact of NVM on the performance of collective I/O. We reveal the
diminishing effects of the page cache, a minor performance difference between MPI
individual I/O and POSIX I/O, and a performance disadvantage of collective I/O on
NVM due to unnecessary data shuffling. We also model the performance of MPI
collective I/O and study the complex interaction between data shuffling,
storage performance, and I/O access patterns.
Comment: 10 pages
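The page-cache effect studied in perspective (1) can be probed with a minimal timing sketch: a buffered write typically completes in the page cache, while fsync forces the data to the device. The file names and the 4 MiB payload are arbitrary choices, and absolute numbers depend heavily on the machine and filesystem:

```python
import os
import tempfile
import time

def timed_write(path, data, sync):
    """Write `data` to `path` and optionally force it to storage.
    Without fsync, the write typically completes in the page cache."""
    t0 = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        if sync:
            os.fsync(fd)   # flush the page cache to the device
    finally:
        os.close(fd)
    return time.perf_counter() - t0

data = os.urandom(4 * 1024 * 1024)  # 4 MiB payload
with tempfile.TemporaryDirectory() as d:
    cached = timed_write(os.path.join(d, "buf.bin"), data, sync=False)
    synced = timed_write(os.path.join(d, "syn.bin"), data, sync=True)
print(f"buffered: {cached * 1e3:.1f} ms, fsync'd: {synced * 1e3:.1f} ms")
```

On disk-backed storage the fsync'd write is usually far slower, which is the gap the page cache exists to hide; on NVM that gap shrinks, diminishing the cache's benefit.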
Linear-time Temporal Logic guided Greybox Fuzzing
Software model checking is a verification technique which is widely used for
checking temporal properties of software systems. Even though it is a property
verification technique, its common usage in practice is in "bug finding", that
is, finding violations of temporal properties. Motivated by this observation
and leveraging the recent progress in fuzzing, we build a greybox fuzzing
framework to find violations of Linear-time Temporal Logic (LTL) properties.
Our framework takes as input a sequential program written in C/C++, and an
LTL property. It finds violations, or counterexample traces, of the LTL
property in stateful software systems; however, it does not achieve
verification. Our work substantially extends directed greybox fuzzing to
witness arbitrarily complex event orderings. We note that existing directed
greybox fuzzing approaches are limited to witnessing the reachability of a
location or simple event orderings such as use-after-free. At the same time,
compared to model checkers, our approach finds the counterexamples faster,
thereby finding more counterexamples within a given time budget.
Our LTL-Fuzzer tool, built on top of the AFL fuzzer, is shown to be effective
in detecting bugs in well-known protocol implementations, such as OpenSSL and
Telnet. We use LTL-Fuzzer to reproduce known vulnerabilities (CVEs), to find 15
zero-day bugs by checking properties extracted from RFCs (for which 10 CVEs
have been assigned), and to find violations of both safety as well as liveness
properties in real-world protocol implementations. Our work represents a
practical advance over software model checkers -- while simultaneously
representing a conceptual advance over existing greybox fuzzers. Our work thus
provides a starting point for understanding the unexplored synergies between
software model checking and greybox fuzzing.
Comment: To appear in International Conference on Software Engineering (ICSE) 202
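A finite-trace monitor for a simple response property gives the flavor of what such a fuzzer checks for on each execution; LTL-Fuzzer handles far richer LTL properties and instruments real programs, so the event names and the monitor below are purely illustrative:

```python
def violates_response(trace, req="req", ack="ack"):
    """Check the response property G(req -> F ack) on a finite trace:
    return True if the trace ends with an outstanding, unanswered request."""
    pending = False
    for event in trace:
        if event == ack:
            pending = False        # the outstanding request is answered
        elif event == req:
            pending = True         # a new request awaits acknowledgement
    return pending

print(violates_response(["req", "work", "ack", "req", "ack"]))  # False
print(violates_response(["req", "ack", "req", "close"]))        # True
```

On a finite trace a still-outstanding request is only a candidate liveness violation; confirming liveness violations on real, non-terminating systems requires more machinery than this sketch shows.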
Functional Neural Changes and Altered Cortical–Subcortical Connectivity Associated with Recovery from Internet Gaming Disorder
Background and aims: Although studies have suggested that individuals with Internet gaming disorder (IGD) may have impairments in cognitive functioning, the nature of the relationship is unclear, given that the information is typically derived from cross-sectional studies.
Methods: Individuals with active IGD (n = 154) and individuals no longer meeting criteria (n = 29) after 1 year were examined longitudinally using functional magnetic resonance imaging during performance of cue-craving tasks. Subjective responses and neural correlates were contrasted at study onset and at 1 year.
Results: Subjects’ craving responses to gaming cues decreased significantly at 1 year relative to study onset. Decreased brain responses in the anterior cingulate cortex (ACC) and lentiform nucleus were observed at 1 year relative to onset. Significant positive correlations were observed between changes in brain activities in the lentiform nucleus and changes in self-reported cravings. Dynamic causal modeling analysis showed increased ACC–lentiform connectivity at 1 year relative to study onset.
Conclusions: After recovery from IGD, individuals appear less sensitive to gaming cues. This recovery may involve increased ACC-related control over lentiform-related motivations in the control of cravings. The extent to which cortical control over subcortical motivations may be targeted in treatments for IGD should be examined further.